
    Generic Drone Control Platform for Autonomous Capture of Cinema Scenes

    The movie industry has been using Unmanned Aerial Vehicles as a new tool to produce increasingly complex and aesthetic camera shots. However, the shooting process currently relies on manual control of the drones, which makes it difficult and sometimes inconvenient to work with. In this paper we address the lack of autonomous systems for operating generic rotary-wing drones for shooting purposes. We propose a global control architecture based on a high-level generic API used by many UAVs. Our solution integrates a compound and coupled model of a generic rotary-wing drone with a Full State Feedback strategy. To address the specific task of capturing cinema scenes, we combine the control architecture with an automatic camera path planning approach that encompasses cinematographic techniques. The possibilities offered by our system are demonstrated through a series of experiments.
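
    To illustrate the Full State Feedback idea mentioned above, here is a minimal sketch for one translational axis of a drone modelled as a double integrator; the gain values and the Euler simulation are illustrative assumptions, not the paper's actual controller.

        # Full-state feedback u = -K (x - x_ref) on a double-integrator
        # axis model (position, velocity). Gains are hand-tuned (assumed).
        import numpy as np

        A = np.array([[0.0, 1.0],    # x_dot = v
                      [0.0, 0.0]])   # v_dot = u (commanded acceleration)
        B = np.array([[0.0],
                      [1.0]])
        K = np.array([[4.0, 2.8]])   # assumed state-feedback gain

        x = np.array([[2.0], [0.0]])      # start 2 m from the target pose
        x_ref = np.array([[0.0], [0.0]])  # desired camera position

        dt = 0.01
        for _ in range(500):              # simulate 5 s
            u = -K @ (x - x_ref)          # state-feedback control law
            x = x + dt * (A @ x + B @ u)  # forward-Euler integration

        print(f"position error after 5 s: {x[0, 0]:.4f} m")

    With these assumed gains the closed-loop poles have negative real parts, so the position error decays smoothly toward zero.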

    Fault-tolerant formation driving mechanism designed for heterogeneous MAVs-UGVs groups

    A fault-tolerant method for the stabilization and navigation of 3D heterogeneous formations is proposed in this paper. The presented Model Predictive Control (MPC) based approach enables the deployment of compact formations of closely cooperating autonomous aerial and ground robots in surveillance scenarios without the need for precise external localization. Instead, the proposed method relies on top-view visual relative localization provided by the micro aerial vehicles flying above the ground robots, and on a simple yet stable vision-based navigation using images from an onboard monocular camera. The MPC-based scheme, together with a fault detection and recovery mechanism, provides a robust solution applicable in complex environments with static and dynamic obstacles. The core of the proposed leader-follower formation driving method is the representation of the entire 3D formation as a convex hull projected along a desired path that the group has to follow. This approach yields collision-free solutions and respects the requirement of direct visibility between team members. Uninterrupted visibility is crucial for the employed top-view localization and therefore for the stabilization of the group. The proposed formation driving method and fault recovery mechanisms are verified by the simulations and hardware experiments presented in the paper.
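
    The following sketch illustrates the path-projected formation idea in its simplest leader-follower form: each follower tracks a point at a fixed arc-length offset behind the leader along the desired path. The path shape and offsets are illustrative assumptions; the paper's MPC and fault-recovery layers are not reproduced.

        # Leader-follower formation driving in path-relative coordinates.
        import numpy as np

        s = np.linspace(0.0, 20.0, 2000)               # path parameter
        path = np.stack([s, np.sin(0.3 * s)], axis=1)  # assumed sample path

        seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
        arc = np.concatenate([[0.0], np.cumsum(seg)])  # cumulative arc length

        def point_at(distance):
            """Path point at a given travelled arc length."""
            distance = np.clip(distance, 0.0, arc[-1])
            i = int(np.searchsorted(arc, distance))
            return path[min(i, len(path) - 1)]

        offsets = [0.0, 1.5, 3.0]  # leader, follower 1, follower 2 (m behind)
        leader_progress = 8.0      # leader's current arc length (assumed)

        for robot, off in enumerate(offsets):
            x, y = point_at(leader_progress - off)
            print(f"robot {robot}: desired position ({x:.2f}, {y:.2f})")

    Keeping all desired positions on the same path helps preserve the direct line of sight that the top-view relative localization requires.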

    A practical multirobot localization system

    We present a fast and precise vision-based software system intended for multiple-robot localization. The core component of the software is a novel and efficient algorithm for black-and-white pattern detection. The method is robust to variable lighting conditions, achieves sub-pixel precision, and its computational complexity is independent of the processed image size. With off-the-shelf computational equipment and low-cost cameras, the core algorithm is able to process hundreds of images per second while tracking hundreds of objects with millimeter precision. In addition, we present the method's mathematical model, which allows one to estimate the expected localization precision, area of coverage, and processing speed from the camera's intrinsic parameters and the hardware's processing capacity. The correctness of the presented model and the performance of the algorithm in real-world conditions are verified in several experiments. Apart from the method description, we also make the source code public at http://purl.org/robotics/whycon so that it can be used as an enabling technology for various mobile robotics problems.
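
    The claim that precision can be estimated from camera intrinsics can be illustrated with a back-of-the-envelope pinhole argument: at distance Z, one pixel spans roughly Z / f metres, so a detector with sub-pixel precision p yields a lateral error of about p * Z / f. This simplified argument, with assumed numbers, is not the paper's full model.

        # Approximate lateral localization error under a pinhole camera model.
        def lateral_precision_m(focal_px: float, distance_m: float,
                                subpixel_precision: float) -> float:
            """Lateral error in metres: p * Z / f."""
            return subpixel_precision * distance_m / focal_px

        # Assumed values: f = 700 px, target 3 m away, 0.1 px precision.
        print(f"{lateral_precision_m(700.0, 3.0, 0.1) * 1000:.2f} mm")

    With these assumed values the estimate comes out at about 0.43 mm, consistent with the millimeter-level precision the abstract reports.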

    Robot Perception of Static and Dynamic Objects with an Autonomous Floor Scrubber

    This paper presents the perception system of a new professional cleaning robot for large public places. The proposed system is based on multiple sensors, including 3D and 2D lidars, two RGB-D cameras, and a stereo camera. The two lidars together with an RGB-D camera are used for dynamic object (human) detection and tracking, while the second RGB-D camera and the stereo camera are used for the detection of static objects (dirt and ground objects). A learning and reasoning module for spatio-temporal representation of the environment, based on the perception pipeline, is also introduced. Furthermore, a new dataset collected with the robot in several public places, including a supermarket, a warehouse and an airport, is released. Baseline results on this dataset are provided for further research and comparison. The proposed system has been fully implemented in the Robot Operating System (ROS) with high modularity and is also publicly available to the community.
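
    As a small illustration of the dynamic-object tracking step, the sketch below performs greedy nearest-neighbour association of per-frame detections to existing tracks; the gating distance and 2D point detections are assumptions, and the paper's actual pipeline fuses lidar and RGB-D data in ROS.

        # Greedy nearest-neighbour data association for object tracking.
        import math

        GATE = 0.8  # max association distance in metres (assumed)

        def associate(tracks, detections):
            """Match detections to tracks; spawn tracks for leftovers."""
            unmatched = list(detections)
            for tid in list(tracks):
                if not unmatched:
                    break
                best = min(unmatched, key=lambda d: math.dist(tracks[tid], d))
                if math.dist(tracks[tid], best) < GATE:
                    tracks[tid] = best        # update the matched track
                    unmatched.remove(best)
            for det in unmatched:             # leftover detections
                tracks[max(tracks, default=0) + 1] = det
            return tracks

        tracks = {1: (0.0, 0.0)}              # one existing track
        frame = [(0.3, 0.1), (4.0, 2.0)]      # detections in this frame
        print(associate(tracks, frame))       # track 1 updated, track 2 spawned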

    A Hybrid Visual-Model Based Robot Control Strategy for Micro Ground Robots

    This paper proposes a hybrid vision-based control strategy for micro ground robots that mediates between two vision models from mixed categories: a bio-inspired collision avoidance model and a segmentation-based target following model. The implemented model coordination strategy is described as a probabilistic model using a finite state machine (FSM) that allows the robot to switch behaviours in response to the acquired visual information. Experiments on real robots demonstrated the stability and convergence of the embedded hybrid system, including a study of collective behaviour by a swarm of such robots with environment mediation. This research enables micro robots to run visual models of greater complexity. Moreover, it shows that aggregation behaviour can be realized on micro robots using vision from non-omnidirectional cameras as the only sensing modality.
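
    A minimal sketch of the FSM coordination idea follows: the robot switches between avoidance and following according to the current visual input. The state names, threshold values, and scalar "visual cues" are illustrative assumptions, not the paper's actual model outputs.

        # Behaviour switching with a simple finite state machine.
        import random

        def step(state, collision_cue, target_visible):
            """One FSM transition driven by two visual cues."""
            if collision_cue > 0.7:            # looming stimulus dominates
                return "AVOID"
            if state == "AVOID" and collision_cue > 0.3:
                return "AVOID"                 # keep avoiding until clear
            return "FOLLOW" if target_visible else "SEARCH"

        state = "SEARCH"
        for t in range(5):
            cue = random.random()          # stand-in for the collision model
            seen = random.random() > 0.5   # stand-in for the segmentation model
            state = step(state, cue, seen)
            print(f"t={t}: cue={cue:.2f} target={seen} -> {state}")

    The two thresholds give the avoidance state hysteresis, so the robot does not oscillate between behaviours on borderline inputs.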

    Robot deployment in long-term care: a case study of a mobile robot in physical therapy

    Background. Healthcare systems in industrialised countries are challenged to provide care for a growing number of older adults. Information technology holds the promise of facilitating this process by providing support for care staff and improving the wellbeing of older adults through a variety of support systems. Goal. Little is known about the challenges that arise from the deployment of technology in care settings; yet, the integration of technology into care is one of the core determinants of successful support. In this paper, we discuss challenges and opportunities associated with technology integration in care, using the example of a mobile robot deployed to support physical therapy among older adults with cognitive impairment in the European project STRANDS. Results and discussion. We report on technical challenges along with the perspectives of physical therapists, and provide an overview of lessons learned which we hope will inform the work of researchers and practitioners wishing to integrate robotic aids into the caregiving process.

    A Poisson-spectral model for modelling temporal patterns in human data observed by a robot

    The efficiency of autonomous robots depends on how well they understand their operating environment. While most traditional environment models focus on spatial representation, long-term mobile robot operation in human-populated environments requires that robots have a basic model of human behaviour. We present a framework that allows us to retrieve and represent aggregate human behaviour in large, populated environments on extended temporal scales. Our approach, based on time-varying Poisson process models and spectral analysis, efficiently retrieves long-term, re-occurring patterns of human activity from robot-gathered observations and uses these patterns to i) predict the human activity level at particular times and places and ii) classify locations based on their periodic patterns of activity. The application of our framework to real-world data, gathered by a mobile robot operating in an indoor environment for one month, indicates that its predictive capabilities outperform other temporal modelling methods while being computationally more efficient. The experiment also demonstrates that the spectral signatures act as features that allow us to classify room types in a way that semantically matches humans' expectations.
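
    The sketch below illustrates the spectral part of the approach: keep the strongest periodic components of the observed activity counts' Fourier spectrum and use the reconstruction as a predicted activity level. The synthetic daily pattern and the "keep top-k components" rule are illustrative assumptions, not the paper's full Poisson model.

        # Predicting activity levels from dominant spectral components.
        import numpy as np

        HOURS = 24 * 28                   # four weeks of hourly observations
        t = np.arange(HOURS)
        rng = np.random.default_rng(0)
        # Synthetic ground truth: a daily cycle plus noise (assumed).
        counts = 5 + 4 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, HOURS)

        spectrum = np.fft.rfft(counts)
        keep = np.argsort(np.abs(spectrum))[::-1][:2]  # DC + strongest period
        filtered = np.zeros_like(spectrum)
        filtered[keep] = spectrum[keep]
        rate = np.fft.irfft(filtered, n=HOURS)  # predicted activity level

        dominant = keep[keep > 0][0]
        print(f"dominant period: {HOURS / dominant:.1f} h")  # ~24 h
        print(f"predicted rate at hour 12: {rate[12]:.2f}")

    Recovering the 24-hour component from a month of observations is exactly the kind of long-term periodic pattern the framework exploits.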

    EU long-term dataset with multiple sensors for autonomous driving

    The field of autonomous driving has grown tremendously over the past few years, along with rapid progress in sensor technology. One of the major purposes of using sensors is to provide environment perception for vehicle understanding, learning and reasoning, and ultimately interaction with the environment. In this paper, we first introduce a multisensor platform that allows a vehicle to perceive its surroundings and locate itself in a more efficient and accurate way. The platform integrates eleven heterogeneous sensors, including various cameras and lidars, a radar, an IMU (Inertial Measurement Unit), and a GPS-RTK (Global Positioning System / Real-Time Kinematic) receiver, and uses ROS (Robot Operating System) based software to process the sensory data. We then present a new dataset (https://epan-utbm.github.io/utbm_robocar_dataset/) for autonomous driving that captures many new research challenges (e.g. highly dynamic environments), especially for long-term autonomy (e.g. creating and maintaining maps). The dataset was collected with our instrumented vehicle and is publicly available to the community.
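
    One practical step when working with such a heterogeneous platform is aligning messages from sensors running at different rates. The sketch below pairs lidar scans with the nearest camera frames by timestamp; the rates and tolerance are illustrative assumptions, not the dataset's specification.

        # Nearest-timestamp pairing of two sensor streams.
        import numpy as np

        cam_stamps = np.arange(0.0, 2.0, 1 / 30)    # camera at ~30 Hz (assumed)
        lidar_stamps = np.arange(0.0, 2.0, 1 / 10)  # lidar at ~10 Hz (assumed)
        TOL = 0.02                                  # max pairing offset (s)

        pairs = []
        for ls in lidar_stamps:
            i = int(np.argmin(np.abs(cam_stamps - ls)))  # nearest camera frame
            if abs(cam_stamps[i] - ls) <= TOL:
                pairs.append((ls, cam_stamps[i]))

        print(f"paired {len(pairs)} of {len(lidar_stamps)} lidar scans")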